Human Hallucination
@nishio: Holding the delusion that "AI will tell me the truth" and assuming the output is the truth, or, on realizing it is not the truth, crying "the AI lied!" as if the AI had unethical intentions, looks to me like a human hallucination. @nishio: A list of common human hallucinations:
1: Assuming that the LLM always outputs the truth
2: Assuming that the LLM intended to lie when it outputs something that is not true
3: Assuming that the LLM knows about events more recent than its training data
What other ones are there?
nishio: An example of human hallucination: "A system trained on correct information will produce correct information." >noricoco: I commented on this article this morning in Nikkei Think! Many reporters seem to believe that "chatGPT makes mistakes because it learned misinformation," but I can't say for certain at the mechanical level; judging from its output, I think there is a high possibility that chatGPT itself is fabricating the information.
---
This page is auto-translated from /nishio/人間のハルシネーション using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.